
    High Volumetric Performance Supercapacitors with Controlled Nanomorphology

    The supercapacitor is a promising energy storage device: it offers higher energy density than a dielectric capacitor, and higher power density and a longer cycle life (more than a million cycles) than a conventional battery. To satisfy the requirements of diverse energy technologies, supercapacitors with still higher energy and power densities are needed. In this chapter, we substantially improve electrochemical performance over commercial products by controlling the nanomorphology of the cells. While much past research has focused mainly on gravimetric energy density, we also devote effort to developing nanomorphologic structures that realize high volumetric energy and power densities, since device volume is another critical performance parameter. Moreover, fundamental studies are carried out on mobile-ion transport and storage in the nanostructures developed in this chapter.
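    The gravimetric/volumetric distinction drawn above can be made concrete with the textbook capacitor energy relation E = ½CV². The sketch below uses invented cell parameters (capacitance, voltage, mass, volume) purely for illustration; it is not data from the chapter.

```python
# Illustrative comparison of gravimetric vs. volumetric energy density for a
# hypothetical supercapacitor cell. All numbers below are made up for the sketch.

def energy_density(capacitance_f, voltage_v, mass_kg, volume_l):
    """Return (Wh/kg, Wh/L) for a cell, using E = 1/2 * C * V^2."""
    energy_wh = 0.5 * capacitance_f * voltage_v ** 2 / 3600.0  # joules -> Wh
    return energy_wh / mass_kg, energy_wh / volume_l

# hypothetical 300 F, 2.7 V cell weighing 60 g and occupying 40 mL
grav, vol = energy_density(capacitance_f=300.0, voltage_v=2.7,
                           mass_kg=0.06, volume_l=0.04)
```

Note how the same stored energy yields two different figures of merit: a cell can look good per unit mass yet poor per unit volume, which is why the chapter treats volumetric density as a separate target.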

    The Dynamics Analysis of Two Delayed Epidemic Spreading Models with Latent Period on Heterogeneous Network

    Two novel delayed epidemic spreading models with a latent period on scale-free networks are presented. A formula for the basic reproduction number and an analysis of the dynamical behaviors of the models are given, and numerical simulations verify the main results.
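    The abstract does not state its specific basic reproduction number formula, so the sketch below shows only the classical heterogeneous mean-field ingredient that such formulas on scale-free networks typically contain: the degree-moment ratio ⟨k²⟩/⟨k⟩. The delayed, latent-period models of the paper will have their own (different) expression.

```python
import numpy as np

def degree_moments(degrees):
    """First and second moments <k>, <k^2> of a degree sequence."""
    degrees = np.asarray(degrees, dtype=float)
    return degrees.mean(), (degrees ** 2).mean()

def basic_reproduction_number(beta, gamma, degrees):
    """Classical SIS mean-field form R0 = (beta/gamma) * <k^2>/<k>.

    This is the standard heterogeneous-network baseline, not the formula
    derived in the paper summarized above.
    """
    k1, k2 = degree_moments(degrees)
    return (beta / gamma) * k2 / k1

# toy heavy-tailed degree sequence standing in for a scale-free network
rng = np.random.default_rng(0)
degrees = np.clip(rng.zipf(2.5, size=10_000), 1, 1_000)
r0 = basic_reproduction_number(beta=0.1, gamma=0.5, degrees=degrees)
```

Because ⟨k²⟩ diverges as the network grows for power-law exponents below 3, R0 computed this way rises with network size, which is why epidemic thresholds on scale-free networks behave so differently from homogeneous ones.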

    Grapy-ML: Graph Pyramid Mutual Learning for Cross-dataset Human Parsing

    Human parsing, or semantic segmentation of human body parts, has been an active research topic due to its wide range of potential applications. In this paper, we propose a novel GRAph PYramid Mutual Learning (Grapy-ML) method to address the cross-dataset human parsing problem, where annotations are provided at different granularities. Starting from prior knowledge of the hierarchical structure of the human body, we devise a graph pyramid module (GPM) by stacking three levels of graph structures, from coarse to fine granularity. At each level, GPM uses a self-attention mechanism to model the correlations between context nodes; it then adopts a top-down mechanism to progressively refine the hierarchical features across all levels. GPM also enables efficient mutual learning: the network weights of the first two levels are shared to exchange the learned coarse-granularity information across different datasets. By making use of the multi-granularity labels, Grapy-ML learns a more discriminative feature representation and achieves state-of-the-art performance, as demonstrated by extensive experiments on three popular benchmarks, e.g., the CIHP dataset. The source code is publicly available at https://github.com/Charleshhy/Grapy-ML.
    Comment: Accepted as an oral paper at AAAI 2020. 9 pages, 4 figures. https://www.aaai.org/Papers/AAAI/2020GB/AAAI-HeH.2317.pd
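    The coarse-to-fine, top-down idea behind the graph pyramid can be sketched in a few lines: features of coarse body-part nodes are propagated down to their child nodes at the next finer granularity. The node hierarchy and the propagation rule below are simplified stand-ins for the actual graphs and learned refinement in Grapy-ML, shown only to make the structure concrete.

```python
import numpy as np

# Hypothetical three-level body-part hierarchy (not the paper's exact graphs).
HIERARCHY = {
    "body": ["upper", "lower"],            # level 1 -> level 2
    "upper": ["head", "torso", "arms"],    # level 2 -> level 3
    "lower": ["legs", "feet"],
}

def top_down_refine(parent_feats, children):
    """Initialize each child node with its parent's feature vector.

    A trivial propagation rule; the real GPM refines features with
    self-attention over context nodes at each level.
    """
    child_feats = {}
    for parent, kids in children.items():
        for kid in kids:
            child_feats[kid] = parent_feats[parent].copy()
    return child_feats

level1 = {"body": np.ones(8)}                                  # coarsest node
level2 = top_down_refine(level1, {"body": HIERARCHY["body"]})
level3 = top_down_refine(level2, {p: HIERARCHY[p] for p in ("upper", "lower")})
```

Sharing the first two levels across datasets, as the abstract describes, then amounts to reusing the same coarse-level parameters while only the finest level stays dataset-specific.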

    VSA: Learning Varied-Size Window Attention in Vision Transformers

    Attention within windows has been widely explored in vision transformers to balance performance, computational complexity, and memory footprint. However, current models adopt a hand-crafted, fixed-size window design, which restricts their capacity to model long-term dependencies and to adapt to objects of different sizes. To address this drawback, we propose Varied-Size Window Attention (VSA) to learn adaptive window configurations from data. Specifically, based on the tokens within each default window, VSA employs a window regression module to predict the size and location of the target window, i.e., the attention area from which the key and value tokens are sampled. By adopting VSA independently for each attention head, the model can capture long-term dependencies, gather rich context from diverse windows, and promote information exchange among overlapping windows. VSA is an easy-to-implement module that can replace the window attention in state-of-the-art representative models with minor modifications and negligible extra computational cost while improving their performance by a large margin, e.g., 1.1% for Swin-T on ImageNet classification. In addition, the performance gain increases when larger images are used for training and testing. Experimental results on further downstream tasks, including object detection, instance segmentation, and semantic segmentation, demonstrate the superiority of VSA over vanilla window attention in dealing with objects of different sizes. The code will be released at https://github.com/ViTAE-Transformer/ViTAE-VSA.
    Comment: 23 pages, 13 tables, and 5 figures.
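    The window-regression step described above can be sketched as follows: pooled token features from a default window predict a scale and an offset, which define the target window from which keys and values would be sampled. The fixed linear "regression" weights and the tanh squashing here are illustrative assumptions; in VSA these predictions are learned end-to-end inside the transformer.

```python
import numpy as np

def predict_window(tokens, w_scale, w_offset):
    """Pool window tokens and predict (scale, offset) for the target window.

    tokens: (N, C) features of the tokens in one default window.
    w_scale, w_offset: (C,) hypothetical regression weights.
    """
    pooled = tokens.mean(axis=0)               # (C,) window summary
    scale = 1.0 + np.tanh(pooled @ w_scale)    # positive scale near 1
    offset = np.tanh(pooled @ w_offset)        # shift in window-size units
    return scale, offset

def target_window(center, size, scale, offset):
    """Return (new_center, new_size) of the attention area (1-D for brevity)."""
    return center + offset * size, size * scale

rng = np.random.default_rng(0)
tokens = rng.normal(size=(49, 16))             # a 7x7 window of 16-dim tokens
w_s, w_o = rng.normal(size=16), rng.normal(size=16)
scale, offset = predict_window(tokens, w_s, w_o)
new_center, new_size = target_window(center=3.5, size=7.0,
                                     scale=scale, offset=offset)
```

Running this per attention head, as the abstract notes, lets different heads settle on differently sized and placed windows over the same feature map.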

    Biochar Adsorption Treatment for Typical Pollutants Removal in Livestock Wastewater: A Review

    Biochar, as a high-efficiency, environmentally friendly, and low-cost adsorbent, is commonly used as a soil conditioner, a bio-fuel, and a carbon sequestration reagent. Recently, biochar has attracted much attention in the wastewater treatment field. Many studies apply biochar to adsorb pollutants in wastewater because of its low-cost preparation, high surface area, large pore volume, plentiful functional groups, and environmental stability. Furthermore, it can be reused owing to its high treatment efficiency and resource recovery potential. Because biochar can adsorb the typical pollutants found in livestock wastewater, it is a promising option for treating such wastewater. This review introduces the preparation methods, including pyrolysis, hydrothermal carbonization, and gasification, and presents applications of biochar to adsorb typical pollutants in livestock wastewater, such as organic pollutants, heavy metals, and nutrients. The organic structures, surface functional groups, surface charge, and mineral components of biochar are examined to explain the adsorption mechanisms for organic pollutants, heavy metals, and nutrients. Finally, outlooks are given for the better use of biochar in the future: the relationship between the preparation parameters, structures, and adsorption performance of biochar should be clarified; quantitative analysis of the roles of organic structures, surface functional groups, surface charge, and mineral components in adsorption should be performed; and the disposal of post-sorption biochar should be investigated.
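    Adsorption performance of the kind surveyed above is commonly summarized with the Langmuir isotherm, q = q_max·K·C / (1 + K·C), relating equilibrium uptake q to solute concentration C. The sketch below evaluates that standard relation with invented parameters; the review's own datasets and fitted constants are not reproduced here.

```python
import numpy as np

def langmuir(c, q_max, k):
    """Langmuir equilibrium uptake q (mg/g) at solute concentration c (mg/L).

    q_max: monolayer adsorption capacity (mg/g); k: affinity constant (L/mg).
    """
    return q_max * k * c / (1.0 + k * c)

c = np.array([1.0, 5.0, 10.0, 50.0, 100.0])    # equilibrium conc., mg/L
q = langmuir(c, q_max=80.0, k=0.05)            # uptake, mg/g (toy parameters)
```

The curve saturates at q_max as concentration grows, which is why reported biochar capacities for heavy metals or nutrients are usually quoted as the fitted q_max rather than a single-point measurement.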

    Vision Transformer with Quadrangle Attention

    Window-based attention has become a popular choice in vision transformers due to its superior performance, lower computational complexity, and smaller memory footprint. However, the design of hand-crafted windows, which is data-agnostic, constrains the flexibility of transformers to adapt to objects of varying sizes, shapes, and orientations. To address this issue, we propose a novel quadrangle attention (QA) method that extends window-based attention to a general quadrangle formulation. Our method employs an end-to-end learnable quadrangle regression module that predicts a transformation matrix to transform default windows into target quadrangles for token sampling and attention calculation, enabling the network to model various targets with different shapes and orientations and to capture rich context information. We integrate QA into plain and hierarchical vision transformers to create a new architecture named QFormer, which requires only minor code modifications and incurs negligible extra computational cost. Extensive experiments on public benchmarks demonstrate that QFormer outperforms existing representative vision transformers on various vision tasks, including classification, object detection, semantic segmentation, and pose estimation. The code will be made publicly available at https://github.com/ViTAE-Transformer/QFormer.
    Comment: 15 pages; extension of the ECCV 2022 paper (VSA: Learning Varied-Size Window Attention in Vision Transformers).
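    The window-to-quadrangle step can be sketched with plain projective geometry: a 3×3 transformation matrix maps the four corners of a default square window to a general quadrangle, from which key/value tokens would then be sampled. The matrix below is hand-set (an affine special case) purely for illustration; in QFormer it is regressed end-to-end from the window's tokens.

```python
import numpy as np

def transform_window(corners, matrix):
    """Apply a 3x3 projective transform to (N, 2) corner coordinates."""
    homo = np.hstack([corners, np.ones((corners.shape[0], 1))])  # homogeneous
    mapped = homo @ matrix.T
    return mapped[:, :2] / mapped[:, 2:3]                        # back to 2D

# corners of a default 7x7 window
square = np.array([[0.0, 0.0], [7.0, 0.0], [7.0, 7.0], [0.0, 7.0]])

# hand-set scale + rotation + translation, standing in for a predicted matrix
theta = np.pi / 8
affine = np.array([
    [1.2 * np.cos(theta), -1.2 * np.sin(theta), 1.0],
    [1.2 * np.sin(theta),  1.2 * np.cos(theta), 0.5],
    [0.0,                  0.0,                 1.0],
])
quad = transform_window(square, affine)
```

Because the bottom row of the matrix may also be learned, the attention area is not restricted to rotated rectangles: a full projective transform yields arbitrary convex quadrangles, covering scale, shift, rotation, and shear in one formulation.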